About the project

Introduction to Open Data Science 2018 is a course about modern methods and tools in data science.

You can find my course GitHub repository here: https://github.com/eavalo/IODS-project/.


Regression and model validation

Data

Read in the dataset. The dataset contains 7 variables:

  • gender: Gender: M (Male), F (Female)
  • age: Age (in years) derived from the date of birth
  • attitude: Global attitude toward statistics
  • deep: Deep approach
  • surf: Surface approach
  • stra: Strategic approach
  • points: Exam points

learning2014 <- read.csv("~/git/IODS-project/data/learning2014.csv")

Check that the data was read correctly. Print out the first few rows and check the type of the columns:

head(learning2014)
##   gender age attitude     deep     surf  stra points
## 1      F  53       37 3.583333 2.583333 3.375     25
## 2      M  55       31 2.916667 3.166667 2.750     12
## 3      F  49       25 3.500000 2.250000 3.625     24
## 4      M  53       35 3.500000 2.250000 3.125     10
## 5      M  49       37 3.666667 2.833333 3.625     22
## 6      F  38       38 4.750000 2.416667 3.625     21
str(learning2014)
## 'data.frame':    166 obs. of  7 variables:
##  $ gender  : Factor w/ 2 levels "F","M": 1 2 1 2 2 1 2 1 2 1 ...
##  $ age     : int  53 55 49 53 49 38 50 37 37 42 ...
##  $ attitude: int  37 31 25 35 37 38 35 29 38 21 ...
##  $ deep    : num  3.58 2.92 3.5 3.5 3.67 ...
##  $ surf    : num  2.58 3.17 2.25 2.25 2.83 ...
##  $ stra    : num  3.38 2.75 3.62 3.12 3.62 ...
##  $ points  : int  25 12 24 10 22 21 21 31 24 26 ...

Explore the data by plotting:

library(GGally)
library(ggplot2)

# Define the plot.
p <- ggpairs(learning2014, mapping = aes(col = gender, alpha=0.3), 
             lower = list(combo = wrap("facethist", bins = 20)), legend=1)

# Draw the plot
p

There are almost twice as many women as men in the dataset. The distribution of age is skewed towards higher values. The distributions of attitude, deep, surf, stra and points appear closer to normal. The variable with the highest correlation with points is attitude.
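
A quick numeric check of the last claim (a small sketch, assuming learning2014 is loaded as above):

# correlation of each numeric variable with exam points
cor(learning2014[, c("age", "attitude", "deep", "surf", "stra")], learning2014$points)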

summary(learning2014$gender)
##   F   M 
## 110  56
summary(learning2014$age)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##   17.00   21.00   22.00   25.51   27.00   55.00
summary(learning2014$attitude)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##   14.00   26.00   32.00   31.43   37.00   50.00
summary(learning2014$deep)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##   1.583   3.333   3.667   3.680   4.083   4.917
summary(learning2014$surf)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##   1.583   2.417   2.833   2.787   3.167   4.333
summary(learning2014$stra)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##   1.250   2.625   3.188   3.121   3.625   5.000
summary(learning2014$points)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##    7.00   19.00   23.00   22.72   27.75   33.00

Regression analysis

Fit a regression model using points as the outcome variable and age, gender and attitude as explanatory variables:

# Fit the linear regression model
lm_model <- lm(points ~ age + gender + attitude, data=learning2014)

# Summary of the model
summary(lm_model)
## 
## Call:
## lm(formula = points ~ age + gender + attitude, data = learning2014)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -17.4590  -3.3221   0.2186   4.0247  10.4632 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept) 13.42910    2.29043   5.863 2.48e-08 ***
## age         -0.07586    0.05367  -1.414    0.159    
## genderM     -0.33054    0.91934  -0.360    0.720    
## attitude     0.36066    0.05932   6.080 8.34e-09 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 5.315 on 162 degrees of freedom
## Multiple R-squared:  0.2018, Adjusted R-squared:  0.187 
## F-statistic: 13.65 on 3 and 162 DF,  p-value: 5.536e-08

Gender and age are not significantly associated with points, so remove them from the model. Fit the model again using only attitude as the explanatory variable.

lm_model_2 <- lm(points ~ attitude, data=learning2014)
summary(lm_model_2)
## 
## Call:
## lm(formula = points ~ attitude, data = learning2014)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -16.9763  -3.2119   0.4339   4.1534  10.6645 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept) 11.63715    1.83035   6.358 1.95e-09 ***
## attitude     0.35255    0.05674   6.214 4.12e-09 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 5.32 on 164 degrees of freedom
## Multiple R-squared:  0.1906, Adjusted R-squared:  0.1856 
## F-statistic: 38.61 on 1 and 164 DF,  p-value: 4.119e-09

The explanatory variable attitude is statistically significantly associated with the outcome variable points, with a p-value of 4.12e-09. The estimate of the regression coefficient is 0.35, meaning a one unit increase in attitude is on average associated with a 0.35 unit increase in points. The R² of the model is 0.19, which means that the explanatory variable attitude explains 19% of the variation of the outcome variable points.
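
To make the coefficient concrete, the fitted model can be used to predict exam points for two example attitude values (20 and 30 are illustrative inputs, not values taken from the text):

# predicted points for two example attitude scores; the difference (about 3.5 points)
# is ten times the attitude coefficient
predict(lm_model_2, newdata = data.frame(attitude = c(20, 30)))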

Diagnostics of the regression model

Plot the residuals versus the fitted values:

plot(lm_model_2, which=1)

Plot the normal QQ-plot:

plot(lm_model_2, which=2)

Plot the residuals versus leverage:

plot(lm_model_2, which=5)
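
As an optional compact alternative, the same diagnostic plots can be drawn into a single panel with base graphics:

# draw residuals vs fitted, normal QQ-plot and residuals vs leverage in one panel
par(mfrow = c(2, 2))
plot(lm_model_2, which = c(1, 2, 5))
par(mfrow = c(1, 1))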


Logistic regression

Data

The dataset has been constructed by joining two student alcohol consumption datasets downloaded from https://archive.ics.uci.edu/ml/datasets/Student+Performance. The two datasets contain the same variables and the students partially overlap. The data was joined using the following columns as surrogate identifiers for students:

  • school, sex, age, address, famsize, Pstatus, Medu, Fedu, Mjob, Fjob, reason, nursery, internet

The variables not used for joining have been combined by averaging the numeric columns and by taking the first answer for non-numeric columns. Two new variables have been defined (a sketch of the wrangling is shown after this list):

  • alc_use is the average of Dalc and Walc
  • high_use is TRUE if alc_use is higher than 2 and FALSE otherwise
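
A minimal sketch of this wrangling step (not the exact script used to build the file) is shown below; it assumes the two raw files student-mat.csv and student-por.csv have been read into data frames math and por:

library(dplyr)

# identifier columns used as surrogate student IDs
join_by <- c("school", "sex", "age", "address", "famsize", "Pstatus", "Medu",
             "Fedu", "Mjob", "Fjob", "reason", "nursery", "internet")

# keep only students present in both datasets
math_por <- inner_join(math, por, by = join_by, suffix = c(".math", ".por"))

# start from the identifier columns, then combine each duplicated pair of columns:
# average the numeric answers, take the first answer otherwise
alc <- select(math_por, one_of(join_by))
for (col_name in setdiff(colnames(math), join_by)) {
  two_cols <- select(math_por, starts_with(col_name))
  first_col <- select(two_cols, 1)[[1]]
  if (is.numeric(first_col)) {
    alc[col_name] <- rowMeans(two_cols)
  } else {
    alc[col_name] <- first_col
  }
}

# define the new alcohol use variables
alc <- mutate(alc, alc_use = (Dalc + Walc) / 2, high_use = alc_use > 2)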

Read in the dataset.

alc <- read.csv("~/git/IODS-project/data/alc.csv")

Variables in the data set:

str(alc)
## 'data.frame':    382 obs. of  35 variables:
##  $ school    : Factor w/ 2 levels "GP","MS": 1 1 1 1 1 1 1 1 1 1 ...
##  $ sex       : Factor w/ 2 levels "F","M": 1 1 1 1 1 2 2 1 2 2 ...
##  $ age       : int  18 17 15 15 16 16 16 17 15 15 ...
##  $ address   : Factor w/ 2 levels "R","U": 2 2 2 2 2 2 2 2 2 2 ...
##  $ famsize   : Factor w/ 2 levels "GT3","LE3": 1 1 2 1 1 2 2 1 2 1 ...
##  $ Pstatus   : Factor w/ 2 levels "A","T": 1 2 2 2 2 2 2 1 1 2 ...
##  $ Medu      : int  4 1 1 4 3 4 2 4 3 3 ...
##  $ Fedu      : int  4 1 1 2 3 3 2 4 2 4 ...
##  $ Mjob      : Factor w/ 5 levels "at_home","health",..: 1 1 1 2 3 4 3 3 4 3 ...
##  $ Fjob      : Factor w/ 5 levels "at_home","health",..: 5 3 3 4 3 3 3 5 3 3 ...
##  $ reason    : Factor w/ 4 levels "course","home",..: 1 1 3 2 2 4 2 2 2 2 ...
##  $ nursery   : Factor w/ 2 levels "no","yes": 2 1 2 2 2 2 2 2 2 2 ...
##  $ internet  : Factor w/ 2 levels "no","yes": 1 2 2 2 1 2 2 1 2 2 ...
##  $ guardian  : Factor w/ 3 levels "father","mother",..: 2 1 2 2 1 2 2 2 2 2 ...
##  $ traveltime: int  2 1 1 1 1 1 1 2 1 1 ...
##  $ studytime : int  2 2 2 3 2 2 2 2 2 2 ...
##  $ failures  : int  0 0 2 0 0 0 0 0 0 0 ...
##  $ schoolsup : Factor w/ 2 levels "no","yes": 2 1 2 1 1 1 1 2 1 1 ...
##  $ famsup    : Factor w/ 2 levels "no","yes": 1 2 1 2 2 2 1 2 2 2 ...
##  $ paid      : Factor w/ 2 levels "no","yes": 1 1 2 2 2 2 1 1 2 2 ...
##  $ activities: Factor w/ 2 levels "no","yes": 1 1 1 2 1 2 1 1 1 2 ...
##  $ higher    : Factor w/ 2 levels "no","yes": 2 2 2 2 2 2 2 2 2 2 ...
##  $ romantic  : Factor w/ 2 levels "no","yes": 1 1 1 2 1 1 1 1 1 1 ...
##  $ famrel    : int  4 5 4 3 4 5 4 4 4 5 ...
##  $ freetime  : int  3 3 3 2 3 4 4 1 2 5 ...
##  $ goout     : int  4 3 2 2 2 2 4 4 2 1 ...
##  $ Dalc      : int  1 1 2 1 1 1 1 1 1 1 ...
##  $ Walc      : int  1 1 3 1 2 2 1 1 1 1 ...
##  $ health    : int  3 3 3 5 5 5 3 1 1 5 ...
##  $ absences  : int  5 3 8 1 2 8 0 4 0 0 ...
##  $ G1        : int  2 7 10 14 8 14 12 8 16 13 ...
##  $ G2        : int  8 8 10 14 12 14 12 9 17 14 ...
##  $ G3        : int  8 8 11 14 12 14 12 10 18 14 ...
##  $ alc_use   : num  1 1 2.5 1 1.5 1.5 1 1 1 1 ...
##  $ high_use  : logi  FALSE FALSE TRUE FALSE FALSE FALSE ...

Predictors of high alcohol consumption

Hypotheses

I chose the following four variables to study their relationship to high/low alcohol consumption:

  • goout - going out with friends (numeric: from 1 - very low to 5 - very high)
  • sex - student’s sex (binary: ‘F’ - female or ‘M’ - male)
  • studytime - weekly study time (numeric: 1 - <2 hours, 2 - 2 to 5 hours, 3 - 5 to 10 hours, or 4 - >10 hours)
  • romantic - with a romantic relationship (binary: yes or no)

I hypothesize that a high amount of “going out with friends” is associated with higher alcohol consumption, since alcohol is often consumed in social situations. I think male gender could be associated with higher alcohol consumption simply because males can tolerate more alcohol. I hypothesize that higher “weekly study time” is associated with low alcohol consumption: students with high study times are focused on school and don’t have as much time to drink. I also hypothesize that individuals “with a romantic relationship” consume less alcohol, since they spend more time with their partners than with friends, and alcohol is more often consumed with friends.

Data exploration

Explore the variables of interest with regard to alcohol use:

library(tidyr)
library(dplyr)
library(ggplot2)

# Explore the mean 'goout' and 'studytime' with respect to high/low alcohol use
alc %>% group_by(high_use) %>% summarise(count = n(), mean_goout=mean(goout),
                                         mean_studytime=mean(studytime))
## # A tibble: 2 x 4
##   high_use count mean_goout mean_studytime
##   <lgl>    <int>      <dbl>          <dbl>
## 1 FALSE      268       2.85           2.15
## 2 TRUE       114       3.72           1.77
# Explore high/low alcohol use stratified by 'sex'
alc %>% group_by(high_use, sex) %>% summarise(count = n())
## # A tibble: 4 x 3
## # Groups:   high_use [?]
##   high_use sex   count
##   <lgl>    <fct> <int>
## 1 FALSE    F       156
## 2 FALSE    M       112
## 3 TRUE     F        42
## 4 TRUE     M        72
# Explore high/low alcohol use stratified by 'romantic'
alc %>% group_by(high_use, romantic) %>% summarise(count = n())
## # A tibble: 4 x 3
## # Groups:   high_use [?]
##   high_use romantic count
##   <lgl>    <fct>    <int>
## 1 FALSE    no         180
## 2 FALSE    yes         88
## 3 TRUE     no          81
## 4 TRUE     yes         33
# Draw barplots of variables of interest
###########################################

g_goout <- ggplot(alc, aes(x = goout, fill=high_use)) +
  geom_bar() + xlab("Going out with friends") +
  ggtitle("Going out with friends from 1 (very low) to 5 (very high) by alcohol use")

g_studytime <- ggplot(alc, aes(x = studytime, fill=high_use)) +
  geom_bar() + xlab("Weekly study time") +
  ggtitle("Weekly study time [1 (<2 hours), 2 (2 to 5 hours), 3 (5 to 10 hours), or 4 (>10 hours)] by alchol use")

g_sex <- ggplot(alc, aes(x = sex, fill=high_use)) +
  geom_bar() + 
  ggtitle("Sex by alcohol use")

g_romantic <- ggplot(alc, aes(x = romantic, fill=high_use)) +
  geom_bar() + 
  ggtitle("With a romantic relationship (yes/no) by alcohol use")

# Arrange the plots into a grid
library("gridExtra")
grid.arrange(g_goout, g_studytime, g_sex, g_romantic, ncol=2, nrow=2)

Based on the plots my assumptions seem to be somewhat correct:

  • High alcohol use is associated with going out with friends
  • Low alcohol use is associated with high weekly study time
  • High alcohol use is more common in men than in women
  • Low alcohol use is more common in individuals in romantic relationships

Fitting a logistic regression model

Fit a logistic regression model using high_use as the target variable and goout, studytime, sex and romantic as the explanatory variables.

# Fit the logistic regression model
m <- glm(high_use ~ goout + studytime + sex + romantic, data = alc, family = "binomial")

The summary of the fitted logistic regression model shows that goout, studytime and sex are statistically significantly associated with alcohol consumption. High alcohol consumption is associated with high goout and male gender, and low alcohol consumption is associated with high studytime.

# Summary of the model
summary(m)
## 
## Call:
## glm(formula = high_use ~ goout + studytime + sex + romantic, 
##     family = "binomial", data = alc)
## 
## Deviance Residuals: 
##     Min       1Q   Median       3Q      Max  
## -1.7365  -0.8114  -0.5009   0.9081   2.6642  
## 
## Coefficients:
##             Estimate Std. Error z value Pr(>|z|)    
## (Intercept)  -2.6988     0.5712  -4.725 2.30e-06 ***
## goout         0.7536     0.1187   6.350 2.15e-10 ***
## studytime    -0.4774     0.1683  -2.837  0.00456 ** 
## sexM          0.6657     0.2585   2.576  0.01000 *  
## romanticyes  -0.1424     0.2699  -0.528  0.59767    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 465.68  on 381  degrees of freedom
## Residual deviance: 393.67  on 377  degrees of freedom
## AIC: 403.67
## 
## Number of Fisher Scoring iterations: 4

Coefficients of the model as odds ratios and their confidence intervals:

# Calculate the odds ratios and confidence intervals of the coefficients
or <- coef(m) %>% exp
ci <- confint(m) %>% exp
# Print out the odds ratios and confidence intervals
cbind(or, ci)
##                     or      2.5 %    97.5 %
## (Intercept) 0.06728696 0.02129867 0.2010636
## goout       2.12456419 1.69404697 2.7003422
## studytime   0.62040325 0.44145946 0.8558631
## sexM        1.94589655 1.17538595 3.2443631
## romanticyes 0.86724548 0.50714091 1.4648961

From the odds ratios we can see that a one unit increase in goout is associated with 2.1-fold higher odds of high alcohol consumption, and a one unit increase in studytime multiplies the odds of high alcohol consumption by about 0.62. Male gender is associated with 1.9-fold higher odds of high alcohol consumption compared to female gender. Being in a romantic relationship is not significantly associated with high/low alcohol consumption in this model, since its confidence interval includes 1.

My previously stated hypotheses seem to be supported by this model, except that romantic is not associated with high or low alcohol consumption.

Performance of the model

Fit a logistic regression model with the explanatory variables that were statistically significantly associated with high or low alcohol consumption:

# Fit the logistic regression model
m <- glm(high_use ~ goout + studytime + sex, data = alc, family = "binomial")

Prediction performance of the model

# Calculate the predicted probabilities of high alcohol consumption
probability <- predict(m, type="response")
alc <- mutate(alc, probability=probability)
# Predict the high alcohol use with the probabilities
alc <- mutate(alc, prediction=probability > 0.5)
# Cross-tabulate the actual class and the predicted class
table(high_use = alc$high_use, prediction = alc$prediction)
##         prediction
## high_use FALSE TRUE
##    FALSE   250   18
##    TRUE     76   38

The model seems to be quite good at predicting low alcohol use but performs less well at predicting high alcohol use.
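
The overall accuracy and the class-specific rates can be computed from the same cross-tabulation (a short sketch reusing the objects created above):

# overall accuracy and class-specific rates from the confusion matrix
conf <- table(high_use = alc$high_use, prediction = alc$prediction)
sum(diag(conf)) / sum(conf)                    # proportion classified correctly
conf["TRUE", "TRUE"] / sum(conf["TRUE", ])     # high use correctly detected
conf["FALSE", "FALSE"] / sum(conf["FALSE", ])  # low use correctly detected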

Visualize the actual class, the predicted probabilities and the predicted class.

# Initialize a plot of 'high_use' versus 'probability' in 'alc'
g <- ggplot(alc, aes(x = probability, y = high_use, col=prediction))

# define the geom as points and draw the plot
g + geom_point()

Calculate the total proportion of mis-classified individuals using the regression model and compare it with a simple guessing strategy, where everyone is classified into the most prevalent class (low use of alcohol).

# Define a loss function (mean prediction error)
loss_func <- function(class, prob) {
  n_wrong <- abs(class - prob) > 0.5
  mean(n_wrong)
}

# Call loss_func to compute the proportion of wrong predictions in the (training) data
loss_func(class = alc$high_use, prob = alc$probability)
## [1] 0.2460733
# Compare the results to guessing that everybody belongs to the class low use of alcohol
loss_func(class = alc$high_use, prob = 0)
## [1] 0.2984293

Using the regression model, 24.6% of the individuals are mis-classified, compared to 29.8% when simply guessing that everybody belongs to the low alcohol use class. The model thus provides some improvement over guessing the most prevalent class.
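
The error of the guessing strategy equals the proportion of high-use students in the data, which can be verified with a one-line check:

# proportion of TRUE values in high_use (error when predicting low use for everyone)
mean(alc$high_use)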

Cross-validation of the model

Perform 10-fold cross-validation of the model to estimate its performance on unseen data. Performance is measured as the proportion of mis-classified individuals. The mean prediction error in the test set:

library(boot)
# 10-fold cross-validation
cv <- cv.glm(data = alc, cost = loss_func, glmfit = m, K = 10)
# Mean prediction error
cv$delta[1]
## [1] 0.2565445

The mean prediction error in the test set is 0.257, which is slightly better than the model introduced in the DataCamp exercises, which had a mean prediction error of 0.26 in the test set.

Models with different numbers of predictors

Construct models with different numbers of predictors and calculate the test set and training set prediction errors.

# All the possible predictors
predictors <- c('school', 'sex', 'age', 'address', 'famsize', 'Pstatus', 'Medu',
                'Fedu', 'Mjob', 'Fjob', 'reason', 'nursery', 'internet', 'guardian',
                'traveltime', 'studytime', 'failures', 'schoolsup', 'famsup', 'paid',
                'activities', 'higher', 'romantic', 'famrel', 'freetime', 'goout',
                'health', 'absences', 'G1', 'G2', 'G3')


# Fit several models and record the test and training errors
# 1) Use all of the predictors.
# 2) Drop one predictor and fit a new model.
# 3) Continue until only one predictor is left in the model.


# Fit the models and calculate the errors
test_error <- numeric(length(predictors))
training_error <- numeric(length(predictors))

for(i in length(predictors):1) {
  model_formula <- paste0("high_use ~ ", paste(predictors[1:i], collapse = " + "))
  glmfit <- glm(model_formula, data = alc, family = "binomial")

  # 10-fold cross-validation
  cv <- cv.glm(data = alc, cost = loss_func, glmfit = glmfit, K = 10)
  # Mean prediction error
  test_error[i] <- cv$delta[1]
  # Training error
  training_error[i] <- 
    loss_func(alc$high_use, predict(glmfit,type="response"))
}

# Construct a table of prediction errors for plotting
data_error <- rbind(data.frame(n_predictors=1:length(predictors),
                               prediction_error=test_error,
                               type = "test error"),
                    data.frame(n_predictors=1:length(predictors),
                               prediction_error=training_error,
                               type = "training error"))

                    
# Plot the test and training errors vs. number of predictors in the model
g <- ggplot(data_error, aes(x = n_predictors, y = prediction_error, col=type))

# define the geom as points and draw the plot
g + geom_point()


Clustering and classification

# Load libraries
library(corrplot)
library(dplyr)

Data

Load in the Boston dataset from the MASS package.

library(MASS)
data("Boston")
str(Boston)
## 'data.frame':    506 obs. of  14 variables:
##  $ crim   : num  0.00632 0.02731 0.02729 0.03237 0.06905 ...
##  $ zn     : num  18 0 0 0 0 0 12.5 12.5 12.5 12.5 ...
##  $ indus  : num  2.31 7.07 7.07 2.18 2.18 2.18 7.87 7.87 7.87 7.87 ...
##  $ chas   : int  0 0 0 0 0 0 0 0 0 0 ...
##  $ nox    : num  0.538 0.469 0.469 0.458 0.458 0.458 0.524 0.524 0.524 0.524 ...
##  $ rm     : num  6.58 6.42 7.18 7 7.15 ...
##  $ age    : num  65.2 78.9 61.1 45.8 54.2 58.7 66.6 96.1 100 85.9 ...
##  $ dis    : num  4.09 4.97 4.97 6.06 6.06 ...
##  $ rad    : int  1 2 2 3 3 3 5 5 5 5 ...
##  $ tax    : num  296 242 242 222 222 222 311 311 311 311 ...
##  $ ptratio: num  15.3 17.8 17.8 18.7 18.7 18.7 15.2 15.2 15.2 15.2 ...
##  $ black  : num  397 397 393 395 397 ...
##  $ lstat  : num  4.98 9.14 4.03 2.94 5.33 ...
##  $ medv   : num  24 21.6 34.7 33.4 36.2 28.7 22.9 27.1 16.5 18.9 ...

The dataset has 14 variables and 506 observations. The following variables are present:

  • crim - per capita crime rate by town.
  • zn - proportion of residential land zoned for lots over 25,000 sq.ft.
  • indus - proportion of non-retail business acres per town.
  • chas - Charles River dummy variable (= 1 if tract bounds river; 0 otherwise).
  • nox - nitrogen oxides concentration (parts per 10 million).
  • rm - average number of rooms per dwelling.
  • age - proportion of owner-occupied units built prior to 1940.
  • dis - weighted mean of distances to five Boston employment centres.
  • rad - index of accessibility to radial highways.
  • tax - full-value property-tax rate per $10,000.
  • ptratio - pupil-teacher ratio by town.
  • black - 1000(Bk - 0.63)^2 where Bk is the proportion of blacks by town.
  • lstat - lower status of the population (percent).
  • medv - median value of owner-occupied homes in $1000s.

More details of the dataset can be found here https://stat.ethz.ch/R-manual/R-devel/library/MASS/html/Boston.html.

Data exploration

Summary of the variables in the dataset:

summary(Boston)
##       crim                zn             indus            chas        
##  Min.   : 0.00632   Min.   :  0.00   Min.   : 0.46   Min.   :0.00000  
##  1st Qu.: 0.08204   1st Qu.:  0.00   1st Qu.: 5.19   1st Qu.:0.00000  
##  Median : 0.25651   Median :  0.00   Median : 9.69   Median :0.00000  
##  Mean   : 3.61352   Mean   : 11.36   Mean   :11.14   Mean   :0.06917  
##  3rd Qu.: 3.67708   3rd Qu.: 12.50   3rd Qu.:18.10   3rd Qu.:0.00000  
##  Max.   :88.97620   Max.   :100.00   Max.   :27.74   Max.   :1.00000  
##       nox               rm             age              dis        
##  Min.   :0.3850   Min.   :3.561   Min.   :  2.90   Min.   : 1.130  
##  1st Qu.:0.4490   1st Qu.:5.886   1st Qu.: 45.02   1st Qu.: 2.100  
##  Median :0.5380   Median :6.208   Median : 77.50   Median : 3.207  
##  Mean   :0.5547   Mean   :6.285   Mean   : 68.57   Mean   : 3.795  
##  3rd Qu.:0.6240   3rd Qu.:6.623   3rd Qu.: 94.08   3rd Qu.: 5.188  
##  Max.   :0.8710   Max.   :8.780   Max.   :100.00   Max.   :12.127  
##       rad              tax           ptratio          black       
##  Min.   : 1.000   Min.   :187.0   Min.   :12.60   Min.   :  0.32  
##  1st Qu.: 4.000   1st Qu.:279.0   1st Qu.:17.40   1st Qu.:375.38  
##  Median : 5.000   Median :330.0   Median :19.05   Median :391.44  
##  Mean   : 9.549   Mean   :408.2   Mean   :18.46   Mean   :356.67  
##  3rd Qu.:24.000   3rd Qu.:666.0   3rd Qu.:20.20   3rd Qu.:396.23  
##  Max.   :24.000   Max.   :711.0   Max.   :22.00   Max.   :396.90  
##      lstat            medv      
##  Min.   : 1.73   Min.   : 5.00  
##  1st Qu.: 6.95   1st Qu.:17.02  
##  Median :11.36   Median :21.20  
##  Mean   :12.65   Mean   :22.53  
##  3rd Qu.:16.95   3rd Qu.:25.00  
##  Max.   :37.97   Max.   :50.00

Explore the distribution of the variables by plotting:

library(GGally)
library(ggplot2)

# Define the plot.
p <- ggpairs(Boston, mapping = aes(alpha=0.3), 
             lower = list(combo = wrap("facethist", bins = 20)))

# Draw the plot
p

Correlation of the variables:

cor(Boston) %>% corrplot(method="circle", type="upper", cl.pos="b", tl.pos="d")

Data wrangling

Scale the dataset so that the mean of each variable is zero and standard deviation is one:

\[x_{scaled}=\frac{x - \mu_{x}}{\sigma_{x}}\],

where \(\mu_{x}\) is the mean of x and \(\sigma_{x}\) the standard deviation of x.

boston_scaled <- scale(Boston) %>% as.data.frame()
summary(boston_scaled)
##       crim                 zn               indus        
##  Min.   :-0.419367   Min.   :-0.48724   Min.   :-1.5563  
##  1st Qu.:-0.410563   1st Qu.:-0.48724   1st Qu.:-0.8668  
##  Median :-0.390280   Median :-0.48724   Median :-0.2109  
##  Mean   : 0.000000   Mean   : 0.00000   Mean   : 0.0000  
##  3rd Qu.: 0.007389   3rd Qu.: 0.04872   3rd Qu.: 1.0150  
##  Max.   : 9.924110   Max.   : 3.80047   Max.   : 2.4202  
##       chas              nox                rm               age         
##  Min.   :-0.2723   Min.   :-1.4644   Min.   :-3.8764   Min.   :-2.3331  
##  1st Qu.:-0.2723   1st Qu.:-0.9121   1st Qu.:-0.5681   1st Qu.:-0.8366  
##  Median :-0.2723   Median :-0.1441   Median :-0.1084   Median : 0.3171  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.:-0.2723   3rd Qu.: 0.5981   3rd Qu.: 0.4823   3rd Qu.: 0.9059  
##  Max.   : 3.6648   Max.   : 2.7296   Max.   : 3.5515   Max.   : 1.1164  
##       dis               rad               tax             ptratio       
##  Min.   :-1.2658   Min.   :-0.9819   Min.   :-1.3127   Min.   :-2.7047  
##  1st Qu.:-0.8049   1st Qu.:-0.6373   1st Qu.:-0.7668   1st Qu.:-0.4876  
##  Median :-0.2790   Median :-0.5225   Median :-0.4642   Median : 0.2746  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.6617   3rd Qu.: 1.6596   3rd Qu.: 1.5294   3rd Qu.: 0.8058  
##  Max.   : 3.9566   Max.   : 1.6596   Max.   : 1.7964   Max.   : 1.6372  
##      black             lstat              medv        
##  Min.   :-3.9033   Min.   :-1.5296   Min.   :-1.9063  
##  1st Qu.: 0.2049   1st Qu.:-0.7986   1st Qu.:-0.5989  
##  Median : 0.3808   Median :-0.1811   Median :-0.1449  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.4332   3rd Qu.: 0.6024   3rd Qu.: 0.2683  
##  Max.   : 0.4406   Max.   : 3.5453   Max.   : 2.9865

From the summary we can see that the mean of the scaled variables is zero.

Create a factor variable crime from crim (per capita crime rate by town) by cutting crim at its quartiles into ‘low’, ‘med_low’, ‘med_high’ and ‘high’ categories:

# Create a quantile vector of crim to use as break points
bins <- quantile(boston_scaled$crim)

# Create a categorical variable 'crime'
crime <- cut(boston_scaled$crim, breaks = bins, include.lowest = TRUE, 
             label=c("low", "med_low", "med_high", "high"))

# Remove original crim from the dataset
boston_scaled <- dplyr::select(boston_scaled, -crim)

# Add the new categorical value to scaled data
boston_scaled <- data.frame(boston_scaled, crime)

Divide the dataset into training and test sets so that 80% of the observations belong to the training set and 20% to the test set.

# Set seed so the results are reproducible
set.seed(1234)
# Take randomly 80% of the observations to the training set
train.idx <- sample(nrow(boston_scaled), size = 0.8 * nrow(boston_scaled))
train <- boston_scaled[train.idx,]
# Take the remaining 20% to the test set
test <- boston_scaled[-train.idx,]

Linear discriminant analysis

Fit the linear discriminant analysis (LDA) on the training set using the categorical crime rate as the target variable and all the other variables in the dataset as predictor variables.

# linear discriminant analysis
lda.fit <- lda(crime ~ ., data = train)

The LDA biplot:

# the function for lda biplot arrows
lda.arrows <- function(x, myscale = 1, arrow_heads = 0.1, color = "red", tex = 0.75, choices = c(1,2)){
  heads <- coef(x)
  arrows(x0 = 0, y0 = 0, 
         x1 = myscale * heads[,choices[1]], 
         y1 = myscale * heads[,choices[2]], col=color, length = arrow_heads)
  text(myscale * heads[,choices], labels = row.names(heads), 
       cex = tex, col=color, pos=3)
}

# target classes as numeric
classes <- as.numeric(train$crime)

# plot the lda results
plot(lda.fit, dimen = 2, col=classes, pch=classes)
lda.arrows(lda.fit, myscale = 2)

Use the fitted LDA model to predict the categorical crime rate in the test set. Cross tabulate the observed classes and the predicted classes in the test set:

# Save the correct classes from test data
correct_classes <- test$crime

# Remove the crime variable from test data
test <- dplyr::select(test, -crime)

# predict classes with test data
lda.pred <- predict(lda.fit, newdata = test)

# cross tabulate the results
table(correct = correct_classes, predicted = lda.pred$class)
##           predicted
## correct    low med_low med_high high
##   low       11       8        0    0
##   med_low    7      19        2    0
##   med_high   1       6       18    1
##   high       0       0        0   29

The model seems to perform perfectly at predicting the ‘high’ class and also predicts the other classes reasonably well. The prediction accuracy is worst for the ‘low’ class: the model mis-classifies a large proportion of the ‘low’ observations as ‘med_low’.
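
The overall accuracy of the LDA predictions on the test set can be computed from the same objects (with the cross-tabulation above this gives roughly 0.75):

# proportion of correctly classified observations in the test set
mean(lda.pred$class == correct_classes)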

K-means clustering

Reload the Boston dataset and standardize it as before. Calculate the Euclidean distance between the observations:

# Load the Boston dataset
data("Boston")

# Scale the dataset
boston_scaled <- scale(Boston) %>% as.data.frame()

# Calculate the Euclidean distance between the pairs of observations
dist_eu <- dist(boston_scaled)
summary(dist_eu)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##  0.1343  3.4620  4.8240  4.9110  6.1860 14.4000

Run the k-means algorithm with 3 clusters and visualize the results:

# Set seed to get reproducible results
set.seed(123)

# k-means clustering
km <-kmeans(boston_scaled, centers = 3)

# plot the Boston dataset with clusters
pairs(boston_scaled, col = km$cluster)

Calculate the total within-cluster sum of squares (TWCSS) as the number of clusters changes from 1 to 10.

# Set seed to get reproducible results
set.seed(123)

# Determine the number of clusters
k_max <- 10

# Calculate the total within sum of squares
twcss <- sapply(1:k_max, function(k){kmeans(boston_scaled, k)$tot.withinss})

# visualize the results
qplot(x = 1:k_max, y = twcss, geom = 'line')

The optimal number of clusters is where the total WCSS drops sharply, so based on the graph 2 seems to be the optimal number of clusters. Perform k-means with 2 clusters and visualize the results.

# Set seed to get reproducible results
set.seed(123)

# k-means clustering
km <-kmeans(boston_scaled, centers = 2)

# plot the Boston dataset with clusters
pairs(boston_scaled, col = km$cluster)

LDA of the k-means clusters

Perform k-means clustering with 3 clusters on the scaled Boston dataset. Use the cluster assignments as the target variable for an LDA analysis.

# Set seed to get reproducible results
set.seed(123)

# k-means clustering
km <-kmeans(boston_scaled, centers = 3)

# Add the cluster assignment to the dataset
boston_scaled$kmeans_cluster <- km$cluster

# linear discriminant analysis
lda.fit <- lda(kmeans_cluster ~ ., data = boston_scaled)

The LDA biplot:

# plot the lda results
plot(lda.fit, dimen = 2, col=boston_scaled$kmeans_cluster, 
     pch=boston_scaled$kmeans_cluster)
lda.arrows(lda.fit, myscale = 2)

Based on the biplot the most influential linear separators are (see the check of the LDA coefficients after this list):

  • age - proportion of owner-occupied units built prior to 1940.
  • dis - weighted mean of distances to five Boston employment centres.
  • rad - index of accessibility to radial highways.
  • tax - full-value property-tax rate per $10,000.
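
A quick numerical look at the same thing, using the coefficient (scaling) matrix of the lda.fit object fitted above:

# coefficients of the linear discriminants; variables with large absolute
# values drive the separation seen in the biplot
round(lda.fit$scaling, 2)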

Dimensionality reduction techniques

# Load libraries
library(corrplot)
library(dplyr)
library(tidyr)

Data

Load the dataset from file.

human <- read.csv('data/human.csv', row.names = 1)
head(human)
##               Edu2.FM   Labo.FM Edu.Exp Life.Exp   GNI Mat.Mor Ado.Birth
## Norway      1.0072389 0.8908297    17.5     81.6 64992       4       7.8
## Australia   0.9968288 0.8189415    20.2     82.4 42261       6      12.1
## Switzerland 0.9834369 0.8251001    15.8     83.0 56431       6       1.9
## Denmark     0.9886128 0.8840361    18.7     80.2 44025       5       5.1
## Netherlands 0.9690608 0.8286119    17.9     81.6 45435       6       6.2
## Germany     0.9927835 0.8072289    16.5     80.9 43919       7       3.8
##             Parli.F
## Norway         39.6
## Australia      30.5
## Switzerland    28.5
## Denmark        38.0
## Netherlands    36.9
## Germany        36.9
str(human)
## 'data.frame':    155 obs. of  8 variables:
##  $ Edu2.FM  : num  1.007 0.997 0.983 0.989 0.969 ...
##  $ Labo.FM  : num  0.891 0.819 0.825 0.884 0.829 ...
##  $ Edu.Exp  : num  17.5 20.2 15.8 18.7 17.9 16.5 18.6 16.5 15.9 19.2 ...
##  $ Life.Exp : num  81.6 82.4 83 80.2 81.6 80.9 80.9 79.1 82 81.8 ...
##  $ GNI      : int  64992 42261 56431 44025 45435 43919 39568 52947 42155 32689 ...
##  $ Mat.Mor  : int  4 6 6 5 6 7 9 28 11 8 ...
##  $ Ado.Birth: num  7.8 12.1 1.9 5.1 6.2 3.8 8.2 31 14.5 25.3 ...
##  $ Parli.F  : num  39.6 30.5 28.5 38 36.9 36.9 19.9 19.4 28.2 31.4 ...

The dataset contains 155 observations of 8 variables. It combines several indicators from most countries in the world. The countries are the row names of the data.frame and the variables are:

Health and knowledge

  • GNI - Gross National Income per capita
  • Life.Exp - Life expectancy at birth
  • Edu.Exp - Expected years of schooling
  • Mat.Mor - Maternal mortality ratio
  • Ado.Birth - Adolescent birth rate

Empowerment

  • Parli.F - Percentage of female representatives in parliament
  • Edu2.FM - Ratio of females/males with at least secondary education
  • Labo.FM - Ratio of females/males in the labour force

Data exploration

Visualize the distributions of the variables and their dependencies:

library(GGally)
library(ggplot2)

# Define the plot.
p <- ggpairs(human, mapping = aes(alpha=0.3), 
             lower = list(combo = wrap("facethist", bins = 20)))

# Draw the plot
p

Figure 1. Pairs-plot of the variables in the dataset.

Next visualize the correlation between the variables

cor(human) %>% corrplot()

Figure 2. Correlation of the variables in the dataset.

Summaries of the variables

summary(human)
##     Edu2.FM          Labo.FM          Edu.Exp         Life.Exp    
##  Min.   :0.1717   Min.   :0.1857   Min.   : 5.40   Min.   :49.00  
##  1st Qu.:0.7264   1st Qu.:0.5984   1st Qu.:11.25   1st Qu.:66.30  
##  Median :0.9375   Median :0.7535   Median :13.50   Median :74.20  
##  Mean   :0.8529   Mean   :0.7074   Mean   :13.18   Mean   :71.65  
##  3rd Qu.:0.9968   3rd Qu.:0.8535   3rd Qu.:15.20   3rd Qu.:77.25  
##  Max.   :1.4967   Max.   :1.0380   Max.   :20.20   Max.   :83.50  
##       GNI            Mat.Mor         Ado.Birth         Parli.F     
##  Min.   :   581   Min.   :   1.0   Min.   :  0.60   Min.   : 0.00  
##  1st Qu.:  4198   1st Qu.:  11.5   1st Qu.: 12.65   1st Qu.:12.40  
##  Median : 12040   Median :  49.0   Median : 33.60   Median :19.30  
##  Mean   : 17628   Mean   : 149.1   Mean   : 47.16   Mean   :20.91  
##  3rd Qu.: 24512   3rd Qu.: 190.0   3rd Qu.: 71.95   3rd Qu.:27.95  
##  Max.   :123124   Max.   :1100.0   Max.   :204.80   Max.   :57.50

Principal component analysis

Perform principal component analysis (PCA) using the singular value decomposition (SVD) method on the un-standardized dataset. The variability captured by the principal components:

# perform principal component analysis (with the SVD method)
pca_human <- prcomp(human)

# create and print out a summary of pca_human
s <- summary(pca_human)
s
## Importance of components:
##                              PC1      PC2   PC3   PC4   PC5   PC6    PC7
## Standard deviation     1.854e+04 185.5219 25.19 11.45 3.766 1.566 0.1912
## Proportion of Variance 9.999e-01   0.0001  0.00  0.00 0.000 0.000 0.0000
## Cumulative Proportion  9.999e-01   1.0000  1.00  1.00 1.000 1.000 1.0000
##                           PC8
## Standard deviation     0.1591
## Proportion of Variance 0.0000
## Cumulative Proportion  1.0000
# rounded percentages of variance captured by each PC
pca_pr <- round(100*s$importance[2,], digits = 1) 

# create object pc_lab to be used as axis labels
pc_lab <- paste0(names(pca_pr), " (", pca_pr, "%)")

# draw a biplot of the principal component representation and the original variables
biplot(pca_human, choices = 1:2, cex=c(0.7,1), col=c("grey40", "deeppink2"), xlab = pc_lab[1], ylab = pc_lab[2])

Figure 3. Countries plotted against the two first principal components for the un-standardized dataset.

Next, standardize the dataset so that every variable has a mean of 0 and a standard deviation of 1, and perform PCA on the standardized dataset. Summary of the standardized variables:

# standardize the variables
human_std <- scale(human)

# print out summaries of the standardized variables
summary(human_std)
##     Edu2.FM           Labo.FM           Edu.Exp           Life.Exp      
##  Min.   :-2.8189   Min.   :-2.6247   Min.   :-2.7378   Min.   :-2.7188  
##  1st Qu.:-0.5233   1st Qu.:-0.5484   1st Qu.:-0.6782   1st Qu.:-0.6425  
##  Median : 0.3503   Median : 0.2316   Median : 0.1140   Median : 0.3056  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.5958   3rd Qu.: 0.7350   3rd Qu.: 0.7126   3rd Qu.: 0.6717  
##  Max.   : 2.6646   Max.   : 1.6632   Max.   : 2.4730   Max.   : 1.4218  
##       GNI             Mat.Mor          Ado.Birth          Parli.F       
##  Min.   :-0.9193   Min.   :-0.6992   Min.   :-1.1325   Min.   :-1.8203  
##  1st Qu.:-0.7243   1st Qu.:-0.6496   1st Qu.:-0.8394   1st Qu.:-0.7409  
##  Median :-0.3013   Median :-0.4726   Median :-0.3298   Median :-0.1403  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.3712   3rd Qu.: 0.1932   3rd Qu.: 0.6030   3rd Qu.: 0.6127  
##  Max.   : 5.6890   Max.   : 4.4899   Max.   : 3.8344   Max.   : 3.1850

The variability captured by the principal components:

# perform principal component analysis (with the SVD method)
pca_human_std <- prcomp(human_std)

# create and print out a summary of pca_human
s <- summary(pca_human_std)
s
## Importance of components:
##                           PC1    PC2     PC3     PC4     PC5     PC6
## Standard deviation     2.0708 1.1397 0.87505 0.77886 0.66196 0.53631
## Proportion of Variance 0.5361 0.1624 0.09571 0.07583 0.05477 0.03595
## Cumulative Proportion  0.5361 0.6984 0.79413 0.86996 0.92473 0.96069
##                            PC7     PC8
## Standard deviation     0.45900 0.32224
## Proportion of Variance 0.02634 0.01298
## Cumulative Proportion  0.98702 1.00000
# rounded percentages of variance captured by each PC
pca_pr <- round(100*s$importance[2,], digits = 1) 

# create object pc_lab to be used as axis labels
pc_lab <- paste0(names(pca_pr), " (", pca_pr, "%)")

# draw a biplot of the principal component representation and the original variables
biplot(pca_human_std, choices = 1:2, cex=c(0.7,1), col=c("grey40", "deeppink2"), xlab = pc_lab[1], ylab = pc_lab[2])

Figure 4. Countries plotted against the two first principal components for the standardized dataset.

The expected years of education, life expectancy, gross national income and the ratio of women to men in education seem to correlate with each other as well as with PC1. Maternal mortality and the adolescent birth rate are inversely correlated with the former variables and are also correlated with PC1. The ratio of women to men in the labour force is correlated with the fraction of women in parliament, and these are correlated with PC2.

The first principal component seems to separate the countries based on variables related to health, education and wealth. The second principal component captures variability in the participation of women in working and political life.
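
As a cross-check of this reading, the loadings of the first two principal components can be inspected directly (using the pca_human_std object fitted above):

# variable loadings on the first two principal components of the standardized data
round(pca_human_std$rotation[, 1:2], 2)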

Tea dataset: Multiple Correspondence Analysis

Load the ‘tea’ dataset from the FactoMineR package. The dataset represents a questionnaire on tea answered by 300 individuals: how they drink tea (18 questions), their perception of the product (12 questions) and some personal details. The structure of the dataset:

library("FactoMineR")
data(tea)
str(tea)
## 'data.frame':    300 obs. of  36 variables:
##  $ breakfast       : Factor w/ 2 levels "breakfast","Not.breakfast": 1 1 2 2 1 2 1 2 1 1 ...
##  $ tea.time        : Factor w/ 2 levels "Not.tea time",..: 1 1 2 1 1 1 2 2 2 1 ...
##  $ evening         : Factor w/ 2 levels "evening","Not.evening": 2 2 1 2 1 2 2 1 2 1 ...
##  $ lunch           : Factor w/ 2 levels "lunch","Not.lunch": 2 2 2 2 2 2 2 2 2 2 ...
##  $ dinner          : Factor w/ 2 levels "dinner","Not.dinner": 2 2 1 1 2 1 2 2 2 2 ...
##  $ always          : Factor w/ 2 levels "always","Not.always": 2 2 2 2 1 2 2 2 2 2 ...
##  $ home            : Factor w/ 2 levels "home","Not.home": 1 1 1 1 1 1 1 1 1 1 ...
##  $ work            : Factor w/ 2 levels "Not.work","work": 1 1 2 1 1 1 1 1 1 1 ...
##  $ tearoom         : Factor w/ 2 levels "Not.tearoom",..: 1 1 1 1 1 1 1 1 1 2 ...
##  $ friends         : Factor w/ 2 levels "friends","Not.friends": 2 2 1 2 2 2 1 2 2 2 ...
##  $ resto           : Factor w/ 2 levels "Not.resto","resto": 1 1 2 1 1 1 1 1 1 1 ...
##  $ pub             : Factor w/ 2 levels "Not.pub","pub": 1 1 1 1 1 1 1 1 1 1 ...
##  $ Tea             : Factor w/ 3 levels "black","Earl Grey",..: 1 1 2 2 2 2 2 1 2 1 ...
##  $ How             : Factor w/ 4 levels "alone","lemon",..: 1 3 1 1 1 1 1 3 3 1 ...
##  $ sugar           : Factor w/ 2 levels "No.sugar","sugar": 2 1 1 2 1 1 1 1 1 1 ...
##  $ how             : Factor w/ 3 levels "tea bag","tea bag+unpackaged",..: 1 1 1 1 1 1 1 1 2 2 ...
##  $ where           : Factor w/ 3 levels "chain store",..: 1 1 1 1 1 1 1 1 2 2 ...
##  $ price           : Factor w/ 6 levels "p_branded","p_cheap",..: 4 6 6 6 6 3 6 6 5 5 ...
##  $ age             : int  39 45 47 23 48 21 37 36 40 37 ...
##  $ sex             : Factor w/ 2 levels "F","M": 2 1 1 2 2 2 2 1 2 2 ...
##  $ SPC             : Factor w/ 7 levels "employee","middle",..: 2 2 4 6 1 6 5 2 5 5 ...
##  $ Sport           : Factor w/ 2 levels "Not.sportsman",..: 2 2 2 1 2 2 2 2 2 1 ...
##  $ age_Q           : Factor w/ 5 levels "15-24","25-34",..: 3 4 4 1 4 1 3 3 3 3 ...
##  $ frequency       : Factor w/ 4 levels "1/day","1 to 2/week",..: 1 1 3 1 3 1 4 2 3 3 ...
##  $ escape.exoticism: Factor w/ 2 levels "escape-exoticism",..: 2 1 2 1 1 2 2 2 2 2 ...
##  $ spirituality    : Factor w/ 2 levels "Not.spirituality",..: 1 1 1 2 2 1 1 1 1 1 ...
##  $ healthy         : Factor w/ 2 levels "healthy","Not.healthy": 1 1 1 1 2 1 1 1 2 1 ...
##  $ diuretic        : Factor w/ 2 levels "diuretic","Not.diuretic": 2 1 1 2 1 2 2 2 2 1 ...
##  $ friendliness    : Factor w/ 2 levels "friendliness",..: 2 2 1 2 1 2 2 1 2 1 ...
##  $ iron.absorption : Factor w/ 2 levels "iron absorption",..: 2 2 2 2 2 2 2 2 2 2 ...
##  $ feminine        : Factor w/ 2 levels "feminine","Not.feminine": 2 2 2 2 2 2 2 1 2 2 ...
##  $ sophisticated   : Factor w/ 2 levels "Not.sophisticated",..: 1 1 1 2 1 1 1 2 2 1 ...
##  $ slimming        : Factor w/ 2 levels "No.slimming",..: 1 1 1 1 1 1 1 1 1 1 ...
##  $ exciting        : Factor w/ 2 levels "exciting","No.exciting": 2 1 2 2 2 2 2 2 2 2 ...
##  $ relaxing        : Factor w/ 2 levels "No.relaxing",..: 1 1 2 2 2 2 2 2 2 2 ...
##  $ effect.on.health: Factor w/ 2 levels "effect on health",..: 2 2 2 2 2 2 2 2 2 2 ...

The dataset contains 300 observations of 36 variables. Visualize the dataset:

# visualize the dataset
gather(tea) %>% ggplot(aes(value)) + facet_wrap("key", scales = "free") + geom_bar() + theme(axis.text.x = element_text(angle = 45, hjust = 1, size = 10))

Figure 5. Variables and the distribution of their values in the ‘tea’ dataset.

Perform multiple correspondence analysis (MCA) on the dataset using a subset of variables: Tea, How, how, sugar, where, lunch:

# Column names to keep in the dataset
keep_columns <- c("Tea", "How", "how", "sugar", "where", "lunch")

# Select the 'keep_columns' to create a new dataset
tea_time <- dplyr::select(tea, one_of(keep_columns))

# Multiple correspondence analysis
mca <- MCA(tea_time, graph = FALSE)

# Summary of the model
summary(mca)
## 
## Call:
## MCA(X = tea_time, graph = FALSE) 
## 
## 
## Eigenvalues
##                        Dim.1   Dim.2   Dim.3   Dim.4   Dim.5   Dim.6
## Variance               0.279   0.261   0.219   0.189   0.177   0.156
## % of var.             15.238  14.232  11.964  10.333   9.667   8.519
## Cumulative % of var.  15.238  29.471  41.435  51.768  61.434  69.953
##                        Dim.7   Dim.8   Dim.9  Dim.10  Dim.11
## Variance               0.144   0.141   0.117   0.087   0.062
## % of var.              7.841   7.705   6.392   4.724   3.385
## Cumulative % of var.  77.794  85.500  91.891  96.615 100.000
## 
## Individuals (the 10 first)
##                       Dim.1    ctr   cos2    Dim.2    ctr   cos2    Dim.3
## 1                  | -0.298  0.106  0.086 | -0.328  0.137  0.105 | -0.327
## 2                  | -0.237  0.067  0.036 | -0.136  0.024  0.012 | -0.695
## 3                  | -0.369  0.162  0.231 | -0.300  0.115  0.153 | -0.202
## 4                  | -0.530  0.335  0.460 | -0.318  0.129  0.166 |  0.211
## 5                  | -0.369  0.162  0.231 | -0.300  0.115  0.153 | -0.202
## 6                  | -0.369  0.162  0.231 | -0.300  0.115  0.153 | -0.202
## 7                  | -0.369  0.162  0.231 | -0.300  0.115  0.153 | -0.202
## 8                  | -0.237  0.067  0.036 | -0.136  0.024  0.012 | -0.695
## 9                  |  0.143  0.024  0.012 |  0.871  0.969  0.435 | -0.067
## 10                 |  0.476  0.271  0.140 |  0.687  0.604  0.291 | -0.650
##                       ctr   cos2  
## 1                   0.163  0.104 |
## 2                   0.735  0.314 |
## 3                   0.062  0.069 |
## 4                   0.068  0.073 |
## 5                   0.062  0.069 |
## 6                   0.062  0.069 |
## 7                   0.062  0.069 |
## 8                   0.735  0.314 |
## 9                   0.007  0.003 |
## 10                  0.643  0.261 |
## 
## Categories (the 10 first)
##                        Dim.1     ctr    cos2  v.test     Dim.2     ctr
## black              |   0.473   3.288   0.073   4.677 |   0.094   0.139
## Earl Grey          |  -0.264   2.680   0.126  -6.137 |   0.123   0.626
## green              |   0.486   1.547   0.029   2.952 |  -0.933   6.111
## alone              |  -0.018   0.012   0.001  -0.418 |  -0.262   2.841
## lemon              |   0.669   2.938   0.055   4.068 |   0.531   1.979
## milk               |  -0.337   1.420   0.030  -3.002 |   0.272   0.990
## other              |   0.288   0.148   0.003   0.876 |   1.820   6.347
## tea bag            |  -0.608  12.499   0.483 -12.023 |  -0.351   4.459
## tea bag+unpackaged |   0.350   2.289   0.056   4.088 |   1.024  20.968
## unpackaged         |   1.958  27.432   0.523  12.499 |  -1.015   7.898
##                       cos2  v.test     Dim.3     ctr    cos2  v.test  
## black                0.003   0.929 |  -1.081  21.888   0.382 -10.692 |
## Earl Grey            0.027   2.867 |   0.433   9.160   0.338  10.053 |
## green                0.107  -5.669 |  -0.108   0.098   0.001  -0.659 |
## alone                0.127  -6.164 |  -0.113   0.627   0.024  -2.655 |
## lemon                0.035   3.226 |   1.329  14.771   0.218   8.081 |
## milk                 0.020   2.422 |   0.013   0.003   0.000   0.116 |
## other                0.102   5.534 |  -2.524  14.526   0.197  -7.676 |
## tea bag              0.161  -6.941 |  -0.065   0.183   0.006  -1.287 |
## tea bag+unpackaged   0.478  11.956 |   0.019   0.009   0.000   0.226 |
## unpackaged           0.141  -6.482 |   0.257   0.602   0.009   1.640 |
## 
## Categorical variables (eta2)
##                      Dim.1 Dim.2 Dim.3  
## Tea                | 0.126 0.108 0.410 |
## How                | 0.076 0.190 0.394 |
## how                | 0.708 0.522 0.010 |
## sugar              | 0.065 0.001 0.336 |
## where              | 0.702 0.681 0.055 |
## lunch              | 0.000 0.064 0.111 |

The first dimension explains 15% of the variation and the second dimension 14% of the variation in the data. Of the analyzed variables, how and where have the strongest link to the first and second dimensions.
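
These percentages can also be read directly from the eigenvalue table of the MCA object:

# eigenvalues, percentage of variance and cumulative percentage per dimension
head(mca$eig)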

Visualize the MCA results:

# Visualize MCA
plot(mca, invisible=c("ind"), habillage= "quali")

Figure 6. Variable biplot of the MCA results on the ‘tea’ dataset with variables Tea, How, how, sugar, where, lunch.

Based on the plot, individuals who use unpackaged tea also tend to buy their tea from tea shops and prefer green tea. On the other hand, individuals who use tea bags often buy them from chain stores.